
    Pattern of initiation of monomorphic ventricular tachycardia in recorded intracardiac electrograms

    Background: By analyzing stored intracardiac electrograms recorded during spontaneous monomorphic ventricular tachycardia (VT), we examined the patterns of VT initiation in a group of patients with implantable cardioverter defibrillators (ICDs). Methods: Stored electrograms (EGMs) of monomorphic VTs, including at least 5 beats before the initiation and after the termination of VT, were analyzed. Cycle length, sinus rate, and the prematurity index for each episode were noted. Results: We studied 182 episodes of VT among 50 patients with ICDs. VPC-induced (extrasystolic) initiation was the most frequent pattern (106 episodes; 58%), followed by sudden onset (76 episodes; 42%). Among the VPC-induced group, the VPCs in 85 episodes (80%) differed in morphology from the subsequent VT. Sudden-onset episodes had longer cycle lengths (377±30 ms) than VPC-induced ones (349±29 ms; P=0.001). The sinus rate before VT was faster (i.e., the sinus cycle length shorter) in the sudden-onset group than in the VPC-induced group (599±227 ms versus 664±213 ms; P=0.005). Both types of episodes responded similarly to ICD tiered therapy. There was no statistically significant difference in coupling interval, prematurity index, underlying heart disease, ejection fraction, or antiarrhythmic drug usage between the two groups (P=NS). Conclusions: Dissimilarities between VT initiation patterns could not be explained by differences in electrical (coupling interval and prematurity index) or clinical (heart disease, ejection fraction, and antiarrhythmic drug) variables among the patients. There was no association between the pattern of VT initiation and the success rate of electrical therapy.
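
    For readers less familiar with the metrics above, a minimal Python sketch of how such per-episode measures might be computed from beat timestamps follows. The function names and the definition used for the prematurity index (coupling interval of the initiating beat divided by the preceding sinus cycle length) are illustrative assumptions, not the authors' implementation.

        # Minimal sketch (not the authors' code): per-episode metrics from
        # beat timestamps (ms) in a stored EGM. Assumes prematurity index =
        # coupling interval of initiating beat / preceding sinus cycle length.

        def mean_cycle_length(beat_times_ms):
            """Mean interval between consecutive beats, in ms."""
            intervals = [b - a for a, b in zip(beat_times_ms, beat_times_ms[1:])]
            return sum(intervals) / len(intervals)

        def prematurity_index(sinus_beats_ms, initiating_beat_ms):
            """Coupling interval of the initiating beat divided by the
            preceding sinus cycle length (dimensionless)."""
            coupling = initiating_beat_ms - sinus_beats_ms[-1]
            preceding_cl = sinus_beats_ms[-1] - sinus_beats_ms[-2]
            return coupling / preceding_cl

        # Example: five sinus beats followed by an early (premature) beat.
        sinus = [0, 660, 1320, 1980, 2640]
        print(mean_cycle_length(sinus))        # 660.0 ms
        print(prematurity_index(sinus, 3080))  # 440/660 ≈ 0.67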

    Face modeling and animation language for MPEG-4 XMT framework

    This paper proposes FML, an XML-based face modeling and animation language. FML provides a structured content description method for multimedia presentations based on face animation. The language can be used as direct input to compatible players, or compiled within the MPEG-4 XMT framework to create MPEG-4 presentations. The language allows parallel and sequential action description, decision-making and dynamic event-based scenarios, model configuration, and behavioral template definition. Facial actions include talking, expressions, head movements, and low-level MPEG-4 FAPs. The ShowFace and iFACE animation frameworks are also reviewed as examples of FML-based animation systems.
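
    To make the language structure concrete, here is a minimal Python sketch that walks a small FML-style document. The tag names (fml, seq, par, talk, expr, hdmv) are assumptions chosen to mirror the parallel and sequential action description mentioned above, not the published FML schema.

        # Minimal sketch with assumed tag names (not the published FML
        # schema): walking an FML-style XML document that mixes sequential
        # (<seq>) and parallel (<par>) facial actions.
        import xml.etree.ElementTree as ET

        FML_DOC = """
        <fml>
          <seq>
            <talk text="Hello there"/>
            <par>
              <expr type="smile" value="0.8"/>
              <hdmv type="nod" repeat="2"/>
            </par>
          </seq>
        </fml>
        """

        def walk(node, depth=0):
            """Print each action nested under its timing container."""
            attrs = " ".join(f'{k}="{v}"' for k, v in node.attrib.items())
            print("  " * depth + node.tag, attrs)
            for child in node:
                walk(child, depth + 1)

        walk(ET.fromstring(FML_DOC))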

    Socially expressive communication agents: A face-centric approach

    Interactive Face Animation - Comprehensive Environment (iFACE) is a general-purpose software framework that encapsulates the functionality of a “face multimedia object”. iFACE exposes programming interfaces and provides authoring and scripting tools to design a face object, define its behaviors, and animate it through static or interactive situations. The framework is based on four parameterized spaces of Geometry, Mood, Personality, and Knowledge that together form the appearance and behavior of the face object. iFACE's capabilities are demonstrated within the context of several artistic and educational projects.
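
    As a rough illustration of the four-space design, consider the Python sketch below; the class names, fields, and the toy mood-to-geometry mapping are hypothetical, chosen only to mirror the Geometry/Mood/Personality/Knowledge decomposition described above, and are not the iFACE API.

        # Hypothetical sketch of the four-space decomposition; names and
        # parameters are illustrative, not the iFACE API.
        from dataclasses import dataclass, field

        @dataclass
        class Mood:                 # transient emotional state
            valence: float = 0.0    # negative .. positive
            arousal: float = 0.0    # calm .. excited

        @dataclass
        class Personality:          # long-term behavioural bias
            dominance: float = 0.0
            affiliation: float = 0.0

        @dataclass
        class FaceObject:
            """A 'face multimedia object': behaviour spaces drive geometry."""
            mood: Mood = field(default_factory=Mood)
            personality: Personality = field(default_factory=Personality)
            knowledge: list = field(default_factory=list)  # e.g. XML rules
            geometry: dict = field(default_factory=dict)   # feature params

            def apply_mood(self):
                # Toy mapping: positive valence raises the lip corners.
                self.geometry["lip_corner_raise"] = max(0.0, self.mood.valence)

        face = FaceObject(mood=Mood(valence=0.6, arousal=0.2))
        face.apply_mood()
        print(face.geometry)   # {'lip_corner_raise': 0.6}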

    Face as multimedia object

    This paper proposes the Face Multimedia Object (FMO) and iFACE, a framework for implementing the face object within multimedia systems. FMO encapsulates all the functionality and data required for face animation. iFACE implements FMO and provides the necessary interfaces for a variety of applications to access FMO services.

    Emotional remapping of music to facial animation

    We propose a method to extract the emotional data from a piece of music and then use that data, via a remapping algorithm, to automatically animate an emotional 3D face sequence. The method is based on studies of the emotional aspects of music and our parametric behavioral head model for face animation. We address the issue of affective communication remapping in general, i.e. translation of affective content (e.g. emotions and mood) from one communication form to another. We report on the results of our MusicFace system, which uses these techniques to automatically create emotional facial animations from multi-instrument polyphonic music scores in MIDI format and a remapping rule set. © ACM, 2006. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 2006 ACM SIGGRAPH Symposium on Videogames, 143-149. Boston, Massachusetts: ACM. doi:10.1145/1183316.118333
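
    A toy Python sketch of the remapping pipeline (musical features, to an emotion estimate, to facial parameters) is given below; the features, thresholds, and output parameters are invented for illustration and are not the MusicFace rule set.

        # Toy sketch of affective remapping (music -> emotion -> face);
        # features and rules are invented, not MusicFace's rule set.

        def estimate_emotion(tempo_bpm, mode):
            """Map coarse musical features to a (valence, arousal) pair;
            mode is 'major' or 'minor'."""
            arousal = min(1.0, max(0.0, (tempo_bpm - 60) / 120))  # 60-180 bpm -> 0-1
            valence = 0.6 if mode == "major" else -0.4
            return valence, arousal

        def emotion_to_face(valence, arousal):
            """Map an emotion estimate to a few facial parameters."""
            return {
                "smile": max(0.0, valence),
                "brow_raise": 0.5 * arousal,
                "eye_open": 0.5 + 0.5 * arousal,
            }

        v, a = estimate_emotion(tempo_bpm=140, mode="major")
        print(emotion_to_face(v, a))
        # {'smile': 0.6, 'brow_raise': 0.33..., 'eye_open': 0.83...}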

    Socially communicative characters for interactive applications

    Interactive Face Animation - Comprehensive Environment (iFACE) is a general-purpose software framework that encapsulates the functionality of a “face multimedia object” for a variety of interactive applications such as games and online services. iFACE exposes programming interfaces and provides authoring and scripting tools to design a face object, define its behaviours, and animate it through static or interactive situations. The framework is based on four parameterized spaces of Geometry, Mood, Personality, and Knowledge that together form the appearance and behaviour of the face object. iFACE can function as a common “face engine” for design and runtime environments, simplifying the work of content and software developers.

    Affective communication remapping in MusicFace System

    This paper addresses the issue of affective communication remapping, i.e. translation of affective content from one communication form to another. We propose a method to extract the affective data from a piece of music and then use that data to animate a face. The method is based on studies of the emotional aspects of music and our behavioural head model for face animation.

    Multispace behavioral model for face-based affective social agents

    This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature level. Personality and mood use findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions through two-dimensional models for personality and emotion. Knowledge encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three spaces provide flexible means of designing complicated personality types, facial expressions, and dynamic interactive scenarios.
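
    To illustrate how a two-dimensional emotion model can drive facial output, the Python sketch below blends a few anchor expressions by their distance from a point in a valence-arousal plane; the anchor placement and blending rule are assumptions for illustration, not the paper's calibration.

        # Illustrative sketch: a point in a 2D valence-arousal space is
        # mapped to a blend of anchor expressions via inverse-distance
        # weighting. Anchor positions are assumed, not the paper's.
        import math

        ANCHORS = {
            "joy":     (0.8,  0.5),
            "anger":   (-0.6, 0.7),
            "sadness": (-0.7, -0.5),
            "calm":    (0.5,  -0.6),
        }

        def blend_weights(valence, arousal):
            """Normalized inverse-distance weights over anchor expressions."""
            raw = {}
            for name, (v, a) in ANCHORS.items():
                d = math.hypot(valence - v, arousal - a)
                raw[name] = 1.0 / (d + 1e-6)
            total = sum(raw.values())
            return {name: w / total for name, w in raw.items()}

        print(blend_weights(0.7, 0.4))   # dominated by 'joy'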

    Idiopathic Submitral Left Ventricular Aneurysm: an Unusual Substrate for Ventricular Tachycardia in Caucasians

    Annular submitral aneurysms have rarely been reported in Caucasians. They are typically diagnosed in non-white adults who present with severe mitral regurgitation, heart failure, systemic embolism, ventricular arrhythmias, or sudden cardiac death. In this article, we describe the case of a white woman, presenting with ventricular tachycardia, in whom a large submitral left ventricular aneurysm was diagnosed incidentally during coronary angiography.

    Inclusion in Virtual Reality Technology: A Scoping Review

    Despite the significant growth in virtual reality applications and research, the notion of inclusion in virtual reality is not well studied. Inclusion refers to the active involvement of different groups of people in the adoption, use, design, and development of VR technology and applications. In this review, we provide a scoping analysis of the existing virtual reality research literature on inclusion. We categorize the literature by target group into ability, gender, and age, followed by studies of community-based design of VR experiences; in the latter group, we focus mainly on Indigenous Peoples as a clearer and more important example. We also briefly review approaches to modeling the role of users in technology adoption and design, as background for inclusion studies. We identify a series of generic barriers and research gaps, along with group-specific ones, resulting in suggested directions for future research.